Shrieking in the face of vengeance
Paraconsistent dialetheism is the view that some contradictions are true and that the inference rule ex falso quodlibet (a.k.a. explosion) is invalid. A long-standing problem for paraconsistent dialetheism is that it has difficulty making sense of situations where people use locutions like 'just true' and 'just false'. Jc Beall recently advocated a general strategy, which he terms shrieking, for solving this problem and thereby strengthening the case for paraconsistent dialetheism. However, Beall's strategy fails, and seeing why it fails brings into greater focus just how daunting the just-true problem is for the dialetheist. Postprint. Peer reviewed.
Conceptual engineering for truth : aletheic properties and new aletheic concepts
What is the property of being true like? To answer this question, begin with a Canberra-plan analysis of the concept of truth. That is, assemble the platitudes for the concept of truth, and then investigate which property might satisfy them. This project is aided by Friedman and Sheard's groundbreaking analysis of twelve logical platitudes for truth. It turns out that, because of paradoxes like the liar, the platitudes for the concept of truth are inconsistent. Moreover, there are so many distinct paradoxes that only small subsets of the platitudes for truth are consistent. The result is that there is no property of being true. The failure of the Canberra-plan analysis of the concept of truth points the way toward a new methodology: a conceptual engineering project for the concept of truth. Conceptual engineering is the practice of assessing the quality of our concepts and, when they are found defective, offering new and better concepts to replace them for certain purposes. Still, there are many aletheic properties, which are properties satisfied by reasonably large subsets of the platitudes for the concept of truth. We can treat these aletheic properties as a guide to the multitude of new aletheic concepts, which are concepts similar to, but distinct from, the concept of truth. Any new aletheic concept or team of concepts might be called on to replace the concept of truth. In particular, the concepts of ascending truth and descending truth are recommended, but the most important point is that we need a full-scale investigation into the space of aletheic properties and new aletheic concepts; that is, we need an Aletheic Principles Project (APP). Publisher PDF. Peer reviewed.
Analytic Pragmatism and universal LX vocabulary
In his recent John Locke Lectures, published as Between Saying and Doing, Brandom extends and refines his views on the nature of language and philosophy by developing a position that he calls Analytic Pragmatism. Although Brandom's project bears on an extraordinarily rich array of different philosophical issues, we focus here on the contention that certain vocabularies have a privileged status within our linguistic practices, and that, when adequately understood, the practices in which these vocabularies figure can help furnish us with an account of semantic intentionality. Brandom's claim is that such vocabularies are privileged because they are a species of what he calls universal LX vocabulary: roughly, vocabulary whose mastery is implicit in any linguistic practice whatsoever. We show that, contrary to Brandom's claim, logical vocabulary per se fails to satisfy the conditions that must be met for something to count as universal LX vocabulary. Further, we show that exactly analogous considerations undermine his claim that modal vocabulary is universal LX. If our arguments are sound, then, contrary to what Brandom maintains, intentionality cannot be explicated as a 'pragmatically mediated semantic phenomenon', at any rate not of the sort that he proposes. Publisher PDF. Peer reviewed.
Replacing truth
Kevin Scharp proposes an original account of the nature and logic of truth, on which truth is an inconsistent concept that should be replaced for certain theoretical purposes. He argues that truth is best understood as an inconsistent concept; develops an axiomatic theory of truth; and offers a new kind of possible-worlds semantics for this theory.
On the indeterminacy of the meter
In the International System of Units (SI), 'meter' is defined in terms of seconds and the speed of light, and 'second' is defined in terms of properties of cesium-133 atoms. I show that one consequence of these definitions is that, if there is a minimal length (e.g., the Planck length), then the chances that 'meter' is completely determinate are only 1 in 21,413,747. Moreover, we have good reason to believe that there is a minimal length. Thus, it is highly probable that 'meter' is indeterminate. If the meter is indeterminate, then any unit in the SI system that is defined in terms of the meter is indeterminate as well. This problem affects most of the familiar derived units in SI. As such, it is highly likely that indeterminacy pervades the SI system. The indeterminacy of the meter is compared and contrasted with the emerging literature on indeterminacy in measurement locutions (as in Eran Tal's recent argument that measurement units are vague in certain ways). Moreover, the indeterminacy of the meter has ramifications for the metaphysics of measurement (e.g., problems for widespread assumptions about the nature of conventionality, as in Theodore Sider's Writing the Book of the World) and the semantics of measurement locutions (e.g., undermining the received view that measurement phrases are absolutely precise, as in Christopher Kennedy's and Louise McNally's semantics for gradable adjectives). Finally, it is shown how to redefine 'meter' and 'second' to completely avoid the indeterminacy. Publisher PDF. Peer reviewed.
Replies to Bacon, Eklund, and Greenough on Replacing Truth
Andrew Bacon, Matti Eklund, and Patrick Greenough have individually proposed objections to the project in my book, Replacing Truth. Briefly, the book outlines a conceptual engineering project: our defective concept of truth is replaced, for certain purposes, with a team of concepts that can do some of the jobs we thought truth could do. Here, I respond to their objections and develop the views expressed in Replacing Truth in various ways. Postprint. Non peer reviewed.
The end of vagueness : technological epistemicism, surveillance capitalism, and explainable Artificial Intelligence
Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement, Explainable Artificial Intelligence (XAI), to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI: it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness, epistemicism, says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism: the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI. Publisher PDF. Peer reviewed.
Truth and Aletheic Paradox
My objective is to provide a theory of truth that is both independently motivated and compatible with the requirement that semantic theories for truth should not demand a substantive distinction between the languages in which they are formulated and those to which they apply. I argue that if a semantic theory for truth does not satisfy this requirement, then it is unacceptable. The central claim of the theory I develop is that truth is an inconsistent concept: the rules for the proper use of truth are incompatible in the sense that they dictate that truth both applies and fails to apply to certain sentences (e.g., those that give rise to the liar and related paradoxes). The most significant challenge for a proponent of an inconsistency theory of truth is producing a plausible theory of inconsistent concepts. Accordingly, I first construct a theory of inconsistent concepts, and then I apply it to truth. On the account I provide, inconsistent concepts are confused concepts. A concept is confused if, in employing it, one is committed to applying it to two or more distinct types of entities without properly distinguishing between them; that is, an employer of a confused concept thinks that two or more distinct entities are identical. I propose a semantic theory for predicates that express confused concepts, and a new many-valued relevance logic on which the semantic theory depends. This semantic theory serves as the basis for my theory of inconsistent concepts. Given this account of inconsistent concepts and my claim that truth is inconsistent, I am committed to the view that truth is confused. I use the semantic theory for confused predicates as a semantic theory for truth. On the account I advance, a proper theory of truth requires a distinction between several different types of truth predicates. I propose an account of each truth predicate, and I advocate using them as consistent replacements for the concept of truth. 
The result is a team of concepts that does the work of the inconsistent concept of truth without giving rise to paradoxes.